Disaggregating and Consolidating Network Functionalities
Resource disaggregation has gained huge popularity in recent years. Existing
works demonstrate how to disaggregate compute, memory, and storage resources.
We, for the first time, demonstrate how to disaggregate network resources by
proposing a new distributed hardware framework called SuperNIC. Each SuperNIC
connects a small set of endpoints and consolidates network functionalities for
these endpoints. We prototyped SuperNIC with an FPGA and demonstrated its
performance and cost benefits with real network functions and customized
disaggregated applications.
Dynamic evolution of COVID-19 on chest computed tomography: experience from Jiangsu Province of China
Objectives
To determine the patterns of chest computed tomography (CT) evolution according to disease severity in a large coronavirus disease 2019 (COVID-19) cohort in Jiangsu Province, China.
Methods
This retrospective cohort study was conducted from January 10, 2020, to February 18, 2020. All patients diagnosed with COVID-19 in Jiangsu Province were retrospectively included. Quantitative CT measurements of pulmonary opacities, including volume, density, and location, were extracted by a deep learning algorithm. The dynamic evolution of these measurements was investigated from symptom onset (day 1) to beyond day 15. Comparisons were made between severity groups.
Results
A total of 484 patients (median age 47 years, interquartile range 33–57) with 954 CT examinations were included, and each was assigned to one of three groups: asymptomatic/mild (n = 63), moderate (n = 378), and severe/critically ill (n = 43). Time series showed different evolution patterns of CT measurements across the groups. Following disease onset, the posteroinferior subpleural area of the lung was the most common location for pulmonary opacities. Opacity volume continued to increase beyond 15 days in the severe/critically ill group, compared with peaking on days 13–15 in the moderate group. The asymptomatic/mild group had the lowest opacity volume, which almost completely resolved after 15 days. The opacity density began to drop from day 10 to day 12 in moderately ill patients.
Conclusions
Volume, density, and location of pulmonary opacities and their evolution on CT varied with disease severity in COVID-19. These findings are valuable in understanding the nature of the disease and in monitoring the patient’s condition during the course of illness.
I/O Is Faster Than the CPU - Let's Partition Resources and Eliminate (Most) OS Abstractions
A survey and classification of software-defined storage systems
The exponential growth of digital information is imposing increasing scale and efficiency demands on modern storage infrastructures. As infrastructure complexity increases, so does the difficulty in ensuring quality of service, maintainability, and resource fairness, raising unprecedented performance, scalability, and programmability challenges. Software-Defined Storage (SDS) addresses these challenges by cleanly disentangling control and data flows, easing management, and improving control functionality of conventional storage systems. Despite its momentum in the research community, many aspects of the paradigm are still unclear, undefined, and unexplored, leading to misunderstandings that hamper the research and development of novel SDS technologies. In this article, we present an in-depth study of SDS systems, providing a thorough description and categorization of each plane of functionality. Further, we propose a taxonomy and classification of existing SDS solutions according to different criteria. Finally, we provide key insights about the paradigm and discuss potential future research directions for the field. This work was financed by the Portuguese funding agency FCT-Fundação para a Ciência e a Tecnologia through national funds, the PhD grant SFRH/BD/146059/2019, the project ThreatAdapt (FCT-FNR/0002/2018), the LASIGE Research Unit (UIDB/00408/2020), and cofunded by the FEDER, where applicable.
Distributing and Disaggregating Hardware Resources in Data Centers
Hardware resource disaggregation is a solution that decomposes general-purpose monolithic servers into segregated, network-attached resource pools, each of which can be built, managed, and scaled independently. Despite its management, cost, and fault-tolerance benefits, hardware resource disaggregation is a drastic departure from the traditional computing paradigm, and it calls for a top-down redesign of system software, hardware, and data center networks. This dissertation shows that it is possible to overcome the challenges of building and deploying hardware resource disaggregation solutions in real data centers, delivering on its promises of better manageability, scalability, and cost. We first explored logical resource disaggregation for emerging persistent memory technologies. Logical resource disaggregation logically breaks the server boundary by building an indirection layer on top of monolithic servers to collectively expose a logical resource pool abstraction. However, this approach could not overcome the inherent problems of monolithic servers. We then explored hardware resource disaggregation, which overcomes these limitations by physically separating hardware resources into network-attached pools. We emulated disaggregated devices using monolithic servers and built the first operating system designed for managing disaggregated resources. It provides backward-compatible interfaces while delivering good performance. However, emulation incurs non-trivial overhead and offers limited parallelism when serving highly concurrent requests. To avoid such overhead, we then built the first publicly known hardware-based disaggregated memory device, which co-designs the networking transport, virtual memory, and hardware. We soon realized that while an increasing amount of effort goes into disaggregating compute, memory, and storage, the network has been completely left out.
The final piece of this dissertation proposes the concept of network disaggregation, which decouples network functionalities from endpoints and consolidates them into a centralized network resource pool. We built a new hardware-based networking device along with a distributed runtime system to realize such a network resource pool. Together, these four pieces outline a practical path to enabling hardware resource disaggregation in real data centers, showing in particular how one can navigate the complex trade-offs among performance, cost, and manageability.
A novel method applied in determination and assessment of trace amount of lead and cadmium in rice from four provinces, China.
Heavy metal contamination of soils or water can lead to excessive lead (Pb) and cadmium (Cd) levels in rice. As cumulative poisons, Pb and Cd consumed in contaminated rice may cause many toxic effects in humans. In the present study, Pb and Cd levels in rice samples from Hubei, Jiangxi, Heilongjiang, and Guangdong provinces in China were analyzed by cloud point extraction and graphite furnace atomic absorption spectrometry (GFAAS). The heavy metals in the rice samples were reacted with 8-quinolinol to form a complex at pH 9.0 and 40°C. Analytes were quantitatively extracted into a surfactant-rich phase (Triton X-45) after centrifugation and analyzed by GFAAS. The effects of experimental conditions, including pH, reagent concentrations, and equilibration time and temperature, on cloud point extraction were optimized efficiently using Plackett-Burman and Box-Behnken experimental designs. Under the optimum conditions, good linearity was observed in the concentration ranges of 0.5-5 µg/L for Pb and 0.05-0.50 µg/L for Cd. The limits of detection were 0.043 µg/L for Pb with a concentration factor of 24.2 in a 10 mL sample and 0.018 µg/L for Cd with a concentration factor of 18.4 in a 10 mL sample. Twenty rice samples from the four provinces were analyzed successfully, and the mean levels of Pb and Cd in the rice were all below their maximum allowable concentrations in China. The mean estimated daily intakes of Pb and Cd through rice consumption were 0.84 µg/kg bw/day and 0.40 µg/kg bw/day, respectively, both lower than the tolerable daily intakes given by FAO/WHO.
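The dietary-exposure comparison above rests on the standard estimated-daily-intake (EDI) formula, EDI = concentration × daily consumption / body weight. A minimal sketch of that arithmetic follows; the function name and all input figures are illustrative assumptions, not values reported in the study.

```python
# Hypothetical worked example of the estimated-daily-intake (EDI)
# calculation used to compare heavy-metal exposure from rice against
# tolerable daily intakes. All numbers below are assumed, not measured.

def estimated_daily_intake(conc_ug_per_kg_rice: float,
                           rice_intake_kg_per_day: float,
                           body_weight_kg: float) -> float:
    """Return EDI in µg per kg body weight per day."""
    return conc_ug_per_kg_rice * rice_intake_kg_per_day / body_weight_kg

# Assumed inputs: 10 µg Pb per kg rice, 0.3 kg rice eaten per day,
# 60 kg body weight.
edi_pb = estimated_daily_intake(10.0, 0.3, 60.0)
print(f"EDI(Pb) = {edi_pb:.3f} µg/kg bw/day")  # → 0.050
```

Comparing such an EDI against the FAO/WHO tolerable daily intake is then a single inequality per metal.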
Approximate message passing for multi-layer estimation in rotationally invariant models
We consider the problem of reconstructing the signal and the hidden variables from observations coming from a multi-layer network with rotationally invariant weight matrices. The multi-layer structure models inference from deep generative priors, and the rotational invariance imposed on the weights generalizes the i.i.d. Gaussian assumption by allowing for a complex correlation structure, which is typical in applications. In this work, we present a new class of approximate message passing (AMP) algorithms and give a state evolution recursion which precisely characterizes their performance in the large system limit. In contrast with the existing multi-layer VAMP (ML-VAMP) approach, our proposed AMP – dubbed multilayer rotationally invariant generalized AMP (ML-RI-GAMP) – provides a natural generalization beyond Gaussian designs, in the sense that it recovers the existing Gaussian AMP as a special case. Furthermore, ML-RI-GAMP exhibits a significantly lower complexity than ML-VAMP, as the computationally intensive singular value decomposition is replaced by an estimation of the moments of the design matrices. Finally, our numerical results show that this complexity gain comes at little to no cost in the performance of the algorithm.
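For orientation, here is a sketch of the classical single-layer Gaussian AMP recursion that ML-RI-GAMP recovers as a special case. This is for the model y = Ax + w with i.i.d. Gaussian A only; the paper's rotationally invariant, multi-layer updates involve additional moment-based corrections not shown here.

```latex
% Classical AMP for y = A x + w, A i.i.d. Gaussian (m x n), delta = m/n,
% eta_t a Lipschitz denoiser; the Gaussian special case only.
\begin{aligned}
z^t &= y - A \hat{x}^t
      + \tfrac{1}{\delta}\, z^{t-1}\,
        \big\langle \eta_{t-1}'\big(\hat{x}^{t-1} + A^\top z^{t-1}\big) \big\rangle, \\
\hat{x}^{t+1} &= \eta_t\big(\hat{x}^t + A^\top z^t\big),
\end{aligned}
% with the scalar state evolution tracking the effective noise level:
\tau_{t+1}^2 = \sigma^2 + \tfrac{1}{\delta}\,
  \mathbb{E}\big[\big(\eta_t(X + \tau_t Z) - X\big)^2\big],
  \qquad Z \sim \mathcal{N}(0,1).
```

The Onsager correction term (the one scaled by 1/δ) is what makes the state evolution exact in the large system limit.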
Swin Transformer Embedding Dual-Stream for Semantic Segmentation of Remote Sensing Imagery
The acquisition of global context and boundary information is crucial for the semantic segmentation of remote sensing (RS) images. In contrast to convolutional neural networks (CNNs), transformers exhibit superior performance in global modeling and shape feature encoding, which provides a novel avenue for obtaining global context and boundary information. However, current methods fail to effectively leverage these distinctive advantages of transformers. To address this issue, we propose a novel single-encoder, dual-decoder architecture called STDSNet, which embeds the Swin transformer into a dual-stream network for semantic segmentation of RS imagery. The proposed STDSNet employs the Swin transformer as the network backbone in the encoder to address the limitations of CNNs in global modeling and encoding shape features. The dual decoder comprises two parallel streams, namely the global stream (GS) and the shape stream (SS). The GS utilizes the global context fusion module (GCFM) to address the loss of global context during upsampling. It further integrates GCFMs with skip connections and a multiscale fusion strategy to mitigate large-scale regional object classification errors resulting from similar features or shadow occlusion in RS images. The SS introduces the gate convolution module (GCM) to filter out irrelevant features, allowing it to focus on processing boundary information, which improves the semantic segmentation performance of small targets and their boundaries in RS images. Extensive experiments demonstrate that STDSNet outperforms other state-of-the-art methods on the ISPRS Vaihingen and Potsdam benchmarks.
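The gating idea behind the shape stream can be illustrated in miniature: a learned gate squashes to (0, 1) via a sigmoid and multiplies the incoming features, suppressing irrelevant responses so only boundary-relevant ones pass. This toy sketch is not the paper's GCM (which operates on convolutional feature maps); it only shows the element-wise gating mechanism.

```python
import math

def sigmoid(v: float) -> float:
    """Logistic function mapping a gate logit to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-v))

def gated_features(features, gate_logits):
    """Element-wise gating: out_i = sigmoid(g_i) * f_i.
    A near-zero gate suppresses the feature; a large gate passes it."""
    return [sigmoid(g) * f for f, g in zip(features, gate_logits)]

feats = [1.0, 2.0, 3.0]
gates = [-10.0, 0.0, 10.0]   # strongly closed, neutral, strongly open
print(gated_features(feats, gates))
```

In the real module the gate logits come from a convolution over the shape-stream features, so the network learns which activations encode boundaries.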
Contention-aware application performance prediction for disaggregated memory systems
Disaggregated memory has recently been proposed as a way to allow flexible and fine-grained allocation of memory capacity to compute jobs. This paper makes an important step towards effective resource allocation on disaggregated memory systems. Specifically, we propose a generic approach to predict the performance degradation due to sharing of disaggregated memory. In contrast to prior work, cache capacity is not shared among multiple applications, which removes a major contributor to application slowdown. For this reason, our analysis is driven by the demand for memory bandwidth, which has been shown to have an important effect on application performance. We show that profiling the application slowdown often involves significant experimental error and noise, and to this end, we improve the accuracy by linear smoothing of the sensitivity curves. We also show that contention is sensitive to the ratio between read and write memory accesses, and we address this sensitivity by building a family of sensitivity curves according to the read/write ratios. Our results show that the methodology predicts the slowdown in application performance subject to memory contention with an average error of 1.19% and a maximum error of 14.6%. Compared with the state of the art, the relative improvements are almost 24% on average and 33% in the worst case. This work is part of a project that has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 754337 (EuroEXA); it has been supported by the Spanish Ministry of Science and Innovation (project TIN2015-65316-P and Ramón y Cajal fellowship RYC2018-025628-I), the Generalitat de Catalunya (contracts 2014-SGR-1051 and 2014-SGR-1272), and the Severo Ochoa Programme (SEV-2015-0493).
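The prediction pipeline described above can be sketched in a few lines: keep a family of sensitivity curves keyed by read/write ratio, pick the curve closest to the application's ratio, and linearly interpolate the slowdown at the contended bandwidth demand. This is an illustrative reconstruction under assumed, made-up curve points, not the paper's profiled data or exact algorithm.

```python
from bisect import bisect_left

def interpolate(curve, demand):
    """Piecewise-linear interpolation of slowdown at a bandwidth demand.
    `curve` is a sorted list of (demand, slowdown) points; demands outside
    the profiled range clamp to the nearest endpoint."""
    xs = [x for x, _ in curve]
    if demand <= xs[0]:
        return curve[0][1]
    if demand >= xs[-1]:
        return curve[-1][1]
    i = bisect_left(xs, demand)
    (x0, y0), (x1, y1) = curve[i - 1], curve[i]
    return y0 + (y1 - y0) * (demand - x0) / (x1 - x0)

def predict_slowdown(curves, rw_ratio, demand):
    """Select the sensitivity curve whose read/write ratio is closest,
    then interpolate the slowdown at the given bandwidth demand."""
    ratio = min(curves, key=lambda r: abs(r - rw_ratio))
    return interpolate(curves[ratio], demand)

# Hypothetical family of curves keyed by read/write ratio:
# each point is (bandwidth demand in GB/s, slowdown factor).
curves = {
    0.5: [(0, 1.00), (10, 1.10), (20, 1.40)],
    2.0: [(0, 1.00), (10, 1.05), (20, 1.20)],
}
print(predict_slowdown(curves, 1.8, 15))  # falls on the ratio-2.0 curve
```

The paper additionally smooths each profiled curve linearly before use, which the sketch omits; smoothing would replace the raw (demand, slowdown) points with a fitted sequence.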